Abstract
Neural networks have been shown to be extremely effective rainfall-runoff models, in which river discharge is predicted from meteorological inputs. However, the question remains: what have these models learned? Is it possible to extract information about the learned relationships that map inputs to outputs, and do these mappings represent known hydrological concepts? Small-scale experiments have demonstrated that the internal states of long short-term memory networks (LSTMs), a neural network architecture predisposed to hydrological modelling, can be interpreted. By extracting the tensors that represent the learned translation from inputs (precipitation, temperature, and potential evapotranspiration) to outputs (discharge), this research seeks to understand what information the LSTM captures about the hydrological system. We assess the hypothesis that the LSTM replicates real-world processes and that we can extract information about these processes from its internal states. We examine the cell-state vector, which represents the memory of the LSTM, and explore the ways in which the LSTM learns to reproduce stores of water, such as soil moisture and snow cover. We use a simple regression approach to map the LSTM state vector to our target stores (soil moisture and snow). Good correlations (R² > 0.8) between the probe outputs and the target variables of interest provide evidence that the LSTM contains information that reflects known hydrological processes, comparable with the concept of variable-capacity soil moisture stores.
The implications of this study are threefold: (1) LSTMs reproduce known hydrological processes. (2) While conceptual models have theoretical assumptions embedded a priori, the LSTM derives these relationships from the data, and the learned representations are interpretable by scientists. (3) LSTMs can be used to estimate intermediate stores of water such as soil moisture. While machine learning interpretability is still a nascent field and our approach is a simple technique for exploring what the model has learned, the results are robust to different initial conditions and to a variety of benchmarking experiments. We therefore argue that deep learning approaches can be used to advance our scientific goals as well as our predictive goals.
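To make the probing step described above concrete, the sketch below fits a linear regression from extracted LSTM cell states to a target store (e.g. soil moisture) and scores it with R² on held-out data. The array names, shapes, and synthetic data are illustrative placeholders, not the paper's actual pipeline.

# Minimal sketch of a linear probe from LSTM cell states to a target store.
# Shapes and data here are synthetic placeholders for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
cell_states = rng.normal(size=(1000, 64))   # stand-in for extracted c_t vectors (timesteps x hidden units)
soil_moisture = cell_states @ rng.normal(size=64) + 0.1 * rng.normal(size=1000)  # stand-in target store

# Split chronologically (no shuffling), fit the probe, and report R² on the held-out period.
X_train, X_test, y_train, y_test = train_test_split(cell_states, soil_moisture, shuffle=False)
probe = LinearRegression().fit(X_train, y_train)
print(f"Probe R² on held-out data: {r2_score(y_test, probe.predict(X_test)):.3f}")

In the paper, an R² above 0.8 for such probes is taken as evidence that the cell states encode the target store; the snippet only illustrates the mechanics of that test.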
Paper
Code
The results of this paper were produced with the NeuralHydrology Python package. The exact code snapshot used to reproduce all results can be found in this GitHub repository. The pretrained models can be found here.
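For orientation, a minimal sketch of training and evaluating a model through NeuralHydrology's documented Python entry points is shown below; the configuration file name and run directory are placeholders, and the repository linked above contains the exact configurations used in the paper.

# Minimal sketch of running NeuralHydrology from Python; file names are placeholders.
from pathlib import Path
from neuralhydrology.nh_run import start_run, eval_run

# Train a model from a YAML configuration file (placeholder name).
start_run(config_file=Path("config.yml"))

# Evaluate a finished run on the test period (placeholder run directory).
eval_run(run_dir=Path("runs/lstm_experiment"), period="test")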
Citation
@Article{lees2021concept,
  author  = {Lees, T. and Reece, S. and Kratzert, F. and Klotz, D. and Gauch, M. and De Bruijn, J. and Kumar Sahu, R. and Greve, P. and Slater, L. and Dadson, S.},
  title   = {Hydrological Concept Formation inside Long Short-Term Memory (LSTM) networks},
  journal = {Hydrology and Earth System Sciences Discussions},
  volume  = {2021},
  year    = {2021},
  pages   = {1--37},
  doi     = {10.5194/hess-2021-566}
}